Making risk minimization tolerant to label noise
Authors
Abstract
Similar Resources
Making risk minimization tolerant to label noise
In many applications, the training data from which one needs to learn a classifier is corrupted with label noise. Many standard algorithms such as SVM perform poorly in the presence of label noise. In this paper we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for the risk minimization under that loss to be tolerant to uniform l...
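The sufficient condition studied in this line of work is a symmetry condition on the loss: for any prediction, the losses summed over all class labels equal a constant. A minimal sketch of that property (assuming multiclass MAE as an example of a symmetric loss and cross-entropy as a non-symmetric one; the data here is illustrative):

```python
import math

def softmax(z):
    # Numerically stable softmax over raw scores.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def mae_loss(p, k):
    # Mean-absolute-error between prediction p and the one-hot label k.
    return sum(abs(pj - (1.0 if j == k else 0.0)) for j, pj in enumerate(p))

def ce_loss(p, k):
    # Cross-entropy against label k.
    return -math.log(p[k])

p = softmax([2.0, 0.5, -1.0])
K = len(p)

# Symmetric loss: the per-label losses sum to the constant 2(K - 1),
# no matter what p is.
mae_sum = sum(mae_loss(p, k) for k in range(K))

# Cross-entropy has no such constant sum; it depends on p.
ce_sum = sum(ce_loss(p, k) for k in range(K))
```

Changing the scores fed to `softmax` leaves `mae_sum` unchanged at `2 * (K - 1)`, while `ce_sum` varies, which is the symmetry property at the heart of the noise-tolerance condition.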
Risk Minimization in the Presence of Label Noise
Matrix concentration inequalities have attracted much attention in diverse applications such as linear algebra, statistical estimation, combinatorial optimization, etc. In this paper, we present new Bernstein concentration inequalities depending only on the first moments of random matrices, whereas previous Bernstein inequalities are heavily relevant to the first and second moments. Based on th...
Label Noise-Tolerant Hidden Markov Models for Segmentation: Application to ECGs
The performance of traditional classification models can be adversely impacted by the presence of label noise in training observations. The pioneering work of Lawrence and Schölkopf tackled this issue in datasets with independent observations by incorporating a statistical noise model within the inference algorithm. In this paper, the specific case of label noise in non-independent observations is...
A comprehensive introduction to label noise
In classification, it is often difficult or expensive to obtain completely accurate and reliable labels. Indeed, labels may be polluted by label noise due, for example, to insufficient information, expert mistakes, and encoding errors. The problem is that errors in training labels that are not properly handled may deteriorate the accuracy of subsequent predictions, among other effects. Many works have ...
Noise Tolerant Inductive Learning
A novel iterative noise-reduction learning algorithm is presented in which rules are learned in two phases. The first phase improves the quality of the training data through a concept-driven closed-loop filtration process. In the second phase, classification rules are relearned from the filtered training dataset.
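The two-phase scheme above can be sketched as: fit a model on the noisy data, drop the training points the model itself misclassifies (treated as likely label noise), then relearn from the filtered set. This is a minimal sketch with a nearest-centroid classifier standing in for the paper's rule learner; the classifier choice and the toy data are assumptions for illustration:

```python
def fit_centroids(X, y):
    # Per-class mean of the feature vectors (stand-in for a rule learner).
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        acc = sums.setdefault(yi, [0.0] * len(xi))
        for j, v in enumerate(xi):
            acc[j] += v
        counts[yi] = counts.get(yi, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict(model, x):
    # Assign x to the class whose centroid is nearest (squared distance).
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda c: d2(model[c], x))

def filter_and_relearn(X, y):
    # Phase 1: fit on all data, then drop points the fitted model
    # misclassifies, treating them as likely label noise.
    model = fit_centroids(X, y)
    kept = [(xi, yi) for xi, yi in zip(X, y) if predict(model, xi) == yi]
    # Phase 2: relearn from the filtered training set.
    Xf, yf = zip(*kept)
    return fit_centroids(list(Xf), list(yf))

# Toy 1-D data with one flipped label at x = 0.1.
X = [[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]]
y = [0, 1, 0, 1, 1, 1]
model = filter_and_relearn(X, y)
```

On this toy set, the mislabeled point at 0.1 is filtered out in phase 1, so the relearned centroids separate the two clusters cleanly.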
Journal
Title: Neurocomputing
سال: 2015
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2014.09.081